Online Learning Over Dynamic Graphs via Distributed Proximal Gradient Algorithm

Authors

Abstract

We consider the problem of tracking the minimum of a time-varying convex optimization problem over a dynamic graph. Motivated by target and parameter estimation problems in intermittently connected robotic sensor networks, the goal is to design a distributed algorithm capable of handling nondifferentiable regularization penalties. The proposed proximal online gradient descent algorithm is built to run in a fully decentralized manner and utilizes consensus updates over possibly disconnected graphs. Its performance is analyzed by developing bounds on its regret in terms of the cumulative path length of the time-varying optimum. It is shown that, as compared to the centralized case, the regret incurred over $T$ time slots is worse by a factor of $\log(T)$ only, despite the dynamic network topology. The empirical performance is tested on a sparse recovery problem, where the algorithm is shown to incur a regret close to that of the centralized algorithm.
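As a rough illustration of the kind of update the abstract describes, the sketch below combines a consensus averaging step over a mixing matrix with a proximal gradient step, using soft-thresholding (the proximal map of an $\ell_1$ penalty) as the nondifferentiable regularizer. The mixing matrix, losses, and step size here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def decentralized_prox_ogd(grads, W, lam, step, d, T):
    """Illustrative decentralized proximal online gradient descent.

    grads[t][i](x) returns node i's local gradient at time t (assumed interface).
    W is a doubly stochastic mixing matrix encoding the (possibly sparse) graph.
    """
    n = W.shape[0]
    X = np.zeros((n, d))                              # one iterate per node
    for t in range(T):
        X = W @ X                                     # consensus averaging with neighbors
        G = np.stack([grads[t][i](X[i]) for i in range(n)])
        X = soft_threshold(X - step * G, step * lam)  # proximal gradient step
    return X
```

With identical quadratic losses at every node, all iterates settle near the soft-thresholded target, which matches the intuition that the proximal step promotes sparsity while consensus keeps the nodes in agreement.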


Related articles

Online gradient descent learning algorithm

This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity independent approach to derive error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error whi...
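A minimal sketch of unregularized least-squares online gradient descent in an RKHS, the setting described above: the hypothesis is stored as a kernel expansion over the observed points, and each step adds one term weighted by the residual. The kernel and step sizes below are illustrative assumptions.

```python
def kernel_ogd(X, y, kernel, steps):
    """Online gradient descent in an RKHS for the least-squares loss,
    without an explicit regularization term.

    The hypothesis is f(x) = sum_i c_i * kernel(x_i, x) over seen points.
    """
    coeffs = []  # list of (coefficient, point) pairs

    def f(x):
        return sum(c * kernel(xi, x) for c, xi in coeffs)

    for xt, yt, eta in zip(X, y, steps):
        resid = f(xt) - yt
        # gradient of 0.5*(f(x_t)-y_t)^2 in the RKHS is resid * kernel(x_t, .)
        coeffs.append((-eta * resid, xt))
    return f
```

Feeding the same labeled point repeatedly drives the prediction toward its label, since each update contracts the residual by a factor depending on the step size and kernel value.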


The proximal-proximal gradient algorithm

We consider the problem of minimizing a convex objective which is the sum of a smooth part, with Lipschitz continuous gradient, and a nonsmooth part. Inspired by various applications, we focus on the case when the nonsmooth part is a composition of a proper closed convex function P and a nonzero affine map, with the proximal mappings of τP , τ > 0, easy to compute. In this case, a direct applic...
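For context, the basic proximal gradient iteration referenced above can be sketched as follows, using the $\ell_1$ norm as an example of a function whose proximal map (soft-thresholding, scaled by $\tau$) is easy to compute. The quadratic smooth part and parameter values are illustrative assumptions.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal map of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, iters):
    """Generic proximal gradient: x <- prox_{step*g}(x - step*grad_f(x))."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x
```

On a simple quadratic-plus-$\ell_1$ objective, the iterates converge to the usual shrunken solution: large coordinates are biased toward zero by the regularization weight, and small ones are set exactly to zero.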


Representation Learning over Dynamic Graphs

How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep – an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolve over time. The learned embeddings drive the dynamics of two key processes, namely communication and ass...


Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs

A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical regularization examples include the total variation and the Laplacian regularizations over the graph. When applying the proximal gradient algorithm to solve this problem, there exist quite affordable methods to implement the proximity operator (back...


Sparse Online Learning via Truncated Gradient

We propose a general method called truncated gradient to induce sparsity in the weights of online-learning algorithms with convex loss. This method has several essential properties. First, the degree of sparsity is continuous—a parameter controls the rate of sparsification from no sparsification to total sparsification. Second, the approach is theoretically motivated, and an instance of it can ...
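The truncated-gradient idea can be sketched roughly as follows: a standard online gradient step, followed every $K$-th step by a shrinkage that pulls weights below a threshold toward zero while leaving larger weights untouched. The threshold and "gravity" parameter values below are illustrative assumptions.

```python
import numpy as np

def truncate(w, alpha, theta):
    """Shrink weights with |w_j| <= theta toward zero by alpha; leave larger ones."""
    out = w.copy()
    small = np.abs(w) <= theta
    out[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - alpha, 0.0)
    return out

def truncated_gradient_step(w, grad, eta, g, theta, t, K):
    """One online step: gradient update, then truncation on every K-th step."""
    w = w - eta * grad
    if t % K == 0:
        w = truncate(w, eta * g * K, theta)  # accumulated shrinkage over K steps
    return w
```

Because the shrinkage amount is continuous in the gravity parameter g, the degree of sparsification can be dialed from none to total, which is the first property the abstract highlights.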



Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2021

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2020.3033712